Studies have shown that autonomous vehicles (AVs) behave conservatively in traffic environments composed of human drivers and do not adapt to local conditions and socio-cultural norms. It is known that socially aware AVs can be designed if there exists a mechanism to understand the behaviors of human drivers. We present an approach that leverages machine learning to predict the behaviors of human drivers. This is similar to how humans implicitly interpret the behaviors of drivers on the road, by only observing the trajectories of their vehicles. We use graph-theoretic tools to extract driver behavior features from the trajectories, and machine learning to obtain a computational mapping between the extracted trajectory of a vehicle in traffic and the behavior of its driver. Compared to existing approaches in this domain, we demonstrate that our method is robust, general, and extendable to wide-ranging applications such as autonomous navigation. We evaluate our approach on real-world traffic datasets captured in the U.S., India, China, and Singapore, as well as in simulation.
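To make the graph-theoretic feature extraction concrete, here is a minimal sketch, not the paper's implementation: a proximity graph is built over vehicles from their trajectory positions, and per-vehicle graph features (degree, closeness centrality) feed a learned behavior classifier. The distance threshold, feature choice, and classifier are illustrative assumptions.

```python
import networkx as nx
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def behavior_features(positions, radius=10.0):
    """positions: (num_vehicles, 2) array of vehicle positions at one timestep."""
    g = nx.Graph()
    g.add_nodes_from(range(len(positions)))
    for i in range(len(positions)):
        for j in range(i + 1, len(positions)):
            if np.linalg.norm(positions[i] - positions[j]) < radius:
                g.add_edge(i, j)  # vehicles within `radius` meters interact
    degree = np.array([g.degree(n) for n in g.nodes])
    closeness = np.array([nx.closeness_centrality(g, n) for n in g.nodes])
    return np.stack([degree, closeness], axis=1)  # one feature row per vehicle

# Hypothetical usage: X rows are per-vehicle graph features aggregated over a
# trajectory window; y labels are behavior classes (e.g. aggressive/conservative).
# clf = RandomForestClassifier().fit(X_train, y_train)
```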
We present a novel approach for multi-agent planning involving human drivers and autonomous vehicles (AVs) at unsignaled intersections, roundabouts, and during merging. In multi-agent planning, the main challenge is to predict the actions of other agents, especially human drivers, as their intentions are hidden from the other agents. Our algorithm uses game theory to develop a new auction, called GamePlan, that directly determines the optimal action for each agent based on their driving style (which is observable via commonly available sensors). GamePlan assigns a higher priority to more aggressive or impatient drivers and a lower priority to more conservative or patient drivers; we theoretically prove that such an approach is game-theoretically optimal and prevents collisions and deadlocks. We compare our approach with prior state-of-the-art auction techniques, including economic auctions, time-based auctions (first-in first-out), and random bidding, and show that each of these methods results in collisions among agents when driver behavior is taken into account. We additionally compare with methods based on deep reinforcement learning, deep learning, and game theory, and present the benefits of our approach over these methods. Finally, we show that our approach can be implemented in the real world with human drivers.
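The following is a minimal sketch of a behavior-weighted auction in the spirit of GamePlan, an illustrative simplification rather than the authors' exact mechanism: each agent bids its observed aggressiveness score, and agents cross the conflict zone one at a time in descending bid order, with deterministic tie-breaking so the ordering is always total.

```python
from dataclasses import dataclass

@dataclass
class Agent:
    agent_id: int
    aggressiveness: float  # estimated from sensor-observed driving style

def auction_order(agents):
    """Return the order in which agents may proceed through the conflict zone."""
    # More aggressive/impatient drivers get higher priority; ties are broken
    # by agent id so equal bids can never produce a deadlock.
    return sorted(agents, key=lambda a: (-a.aggressiveness, a.agent_id))

agents = [Agent(0, 0.2), Agent(1, 0.9), Agent(2, 0.5)]
for turn, agent in enumerate(auction_order(agents)):
    print(f"slot {turn}: agent {agent.agent_id}")  # agent 1, then 2, then 0
```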
We propose GANav, a novel group-wise attention mechanism to identify safe and navigable regions in off-road terrains and unstructured environments from RGB images. Our approach classifies terrains based on their navigability levels using coarse-grained semantic segmentation. Our novel group-wise attention loss enables any backbone network to explicitly focus on the different groups' features with low spatial resolution. Our design leads to efficient inference while maintaining a high level of accuracy compared to existing SOTA methods. Our extensive evaluations on the RUGD and RELLIS-3D datasets show that GANav improves over the SOTA mIoU by 2.25-39.05% on RUGD and 5.17-19.06% on RELLIS-3D. We interface GANav with a deep reinforcement learning-based navigation algorithm and highlight its benefits for navigation in real-world unstructured terrains. We integrate our GANav-based navigation algorithm with ClearPath Jackal and Husky robots and observe an increase of 10% in success rate, an improvement of 2-47% in selecting the surface with the best navigability, and a decrease of 4.6-13.9% in trajectory roughness. Further, GANav reduces false positives on forbidden regions by 37.79%. Code, videos, and a full technical report are available at https://gamma.umd.edu/offroad/.
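As one possible reading of the group-wise idea (an assumption on my part, not GANav's released code), fine-grained terrain classes can be pooled into navigability groups and the segmentation loss computed over group-level probabilities, so the backbone is supervised on traversability groups rather than individual classes. The class-to-group mapping below is hypothetical.

```python
import torch
import torch.nn.functional as F

# Hypothetical mapping from 8 fine-grained terrain classes to 3 groups
# (0: safe, 1: rough, 2: forbidden).
CLASS_TO_GROUP = torch.tensor([0, 0, 1, 1, 1, 2, 2, 2])

def group_segmentation_loss(class_logits, class_labels):
    """class_logits: (B, C, H, W); class_labels: (B, H, W) fine-grained ids."""
    probs = class_logits.softmax(dim=1)                       # (B, C, H, W)
    num_groups = int(CLASS_TO_GROUP.max()) + 1
    # Sum class probabilities within each navigability group.
    group_probs = torch.stack(
        [probs[:, CLASS_TO_GROUP == g].sum(dim=1) for g in range(num_groups)],
        dim=1,
    )                                                         # (B, G, H, W)
    group_labels = CLASS_TO_GROUP[class_labels]               # (B, H, W)
    return F.nll_loss(torch.log(group_probs + 1e-8), group_labels)
```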
We address the problem of ego-vehicle navigation in dense simulated traffic environments populated by road agents with varying driver behaviors. Navigation in such environments is challenging due to the unpredictability of the agents' actions caused by their heterogeneous behaviors. We present a new simulation technique that consists of enriching existing traffic simulators with behavior-rich trajectories corresponding to varying levels of aggressiveness. We generate these trajectories with the help of a driver behavior modeling algorithm. We then use the enriched simulator to train a deep reinforcement learning (DRL) policy that consists of a set of high-level vehicle control commands, and use this policy at test time to perform local navigation in dense traffic. Our policy implicitly models the interactions between traffic agents and computes safe trajectories for the ego-vehicle that account for aggressive driver maneuvers such as overtaking, over-speeding, weaving, and sudden lane changes. Our enhanced behavior-rich simulator can be used to generate datasets consisting of trajectories corresponding to diverse driver behaviors and traffic densities, and our behavior-based navigation scheme can be combined with state-of-the-art navigation algorithms.
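A minimal sketch of what training over high-level vehicle commands could look like; `BehaviorRichTrafficEnv` is a hypothetical gym-style environment assumed to expose discrete commands such as lane changes and speed adjustments, not the paper's actual simulator interface.

```python
import random

HIGH_LEVEL_ACTIONS = ["keep_lane", "lane_left", "lane_right", "speed_up", "slow_down"]

def rollout(env, policy, max_steps=200):
    """Collect one episode of (state, action, reward) transitions for DRL training."""
    state, transitions = env.reset(), []
    for _ in range(max_steps):
        action = policy(state)                  # index into HIGH_LEVEL_ACTIONS
        next_state, reward, done, _ = env.step(action)
        transitions.append((state, action, reward))
        state = next_state
        if done:
            break
    return transitions

# e.g. a random baseline policy, before any DRL training:
random_policy = lambda s: random.randrange(len(HIGH_LEVEL_ACTIONS))
```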
Generalized linear models with nonlinear feature transformations are widely used for large-scale regression and classification problems with sparse inputs. Memorization of feature interactions through a wide set of cross-product feature transformations is effective and interpretable, while generalization requires more feature engineering effort. With less feature engineering, deep neural networks can generalize better to unseen feature combinations through low-dimensional dense embeddings learned for the sparse features. However, deep neural networks with embeddings can over-generalize and recommend less relevant items when the user-item interactions are sparse and high-rank. In this paper, we present Wide & Deep learning, which jointly trains wide linear models and deep neural networks, to combine the benefits of memorization and generalization for recommender systems. We productionized and evaluated the system on Google Play, a commercial mobile app store with over one billion active users and over one million apps. Online experiment results show that Wide & Deep significantly increased app acquisitions compared with wide-only and deep-only models. We have also open-sourced our implementation in TensorFlow.
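A minimal sketch of the Wide & Deep architecture in PyTorch, for illustration only (the paper's released implementation is in TensorFlow): a linear "wide" part over sparse cross-product features is summed with a "deep" MLP over learned embeddings, and both are trained jointly under the same logistic loss. Dimensions are placeholder assumptions.

```python
import torch
import torch.nn as nn

class WideAndDeep(nn.Module):
    def __init__(self, num_wide_features, vocab_size, embed_dim=32, hidden=64):
        super().__init__()
        self.wide = nn.Linear(num_wide_features, 1)           # memorization
        self.embedding = nn.Embedding(vocab_size, embed_dim)  # generalization
        self.deep = nn.Sequential(
            nn.Linear(embed_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1)
        )

    def forward(self, wide_x, sparse_ids):
        # sparse_ids: (batch, num_fields) categorical ids; mean-pool embeddings.
        deep_in = self.embedding(sparse_ids).mean(dim=1)
        logit = self.wide(wide_x) + self.deep(deep_in)  # joint wide + deep logit
        return torch.sigmoid(logit)  # probability of, e.g., an app install
```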
Embedding words in vector space is a fundamental first step in state-of-the-art natural language processing (NLP). Typical NLP solutions employ pre-defined vector representations to improve generalization by co-locating similar words in vector space. For instance, Word2Vec is a self-supervised predictive model that captures the context of words using a neural network. Similarly, GloVe is a popular unsupervised model incorporating corpus-wide word co-occurrence statistics. Such word embeddings have significantly boosted important NLP tasks, including sentiment analysis, document classification, and machine translation. However, the embeddings are dense floating-point vectors, making them expensive to compute and difficult to interpret. In this paper, we instead propose to represent the semantics of words with a few defining words that are related using propositional logic. To produce such logical embeddings, we introduce a Tsetlin Machine-based autoencoder that learns logical clauses in a self-supervised manner. The clauses consist of contextual words like "black," "cup," and "hot" to define other words like "coffee," thus being human-understandable. We evaluate our embedding approach on several intrinsic and extrinsic benchmarks, outperforming GloVe on six classification tasks. Furthermore, we investigate the interpretability of our embedding using the logical representations acquired during training. We also visualize word clusters in vector space, demonstrating how our logical embedding co-locates similar words.
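A minimal sketch of the representation the abstract describes (not the Tsetlin Machine autoencoder itself): each word is embedded as a small set of defining context words, and similarity falls out of the overlap between those sets. The clause contents below are hand-written examples, not learned output.

```python
# Hand-written example clauses: each word is defined by a few contextual words.
logical_embedding = {
    "coffee": {"black", "cup", "hot"},
    "tea":    {"cup", "hot", "leaf"},
    "ice":    {"cold", "water"},
}

def clause_similarity(w1, w2):
    """Jaccard overlap between the defining-word sets of two words."""
    a, b = logical_embedding[w1], logical_embedding[w2]
    return len(a & b) / len(a | b)

print(clause_similarity("coffee", "tea"))  # high: shared context {cup, hot}
print(clause_similarity("coffee", "ice"))  # zero: no shared defining words
```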
Agile robotics presents a difficult challenge: robots moving at high speeds require precise, low-latency sensing and control. Creating agile motion that accomplishes the task at hand while being safe to execute is a key requirement for agile robots to gain human trust. This requires designing new approaches that are flexible and maintain knowledge of world constraints. In this paper, we consider the problem of building a flexible and adaptive controller for the challenging agile mobile manipulation task of hitting ground strokes on a wheelchair tennis robot. We propose and evaluate an extension to prior work on learning striking behaviors using a probabilistic movement primitive (ProMP) framework by (1) demonstrating the safe execution of learned primitives on an agile mobile manipulator setup, and (2) proposing an online primitive refinement procedure that utilizes evaluative feedback from humans on the executed trajectories.
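A minimal ProMP sketch using the standard formulation (not the authors' exact code): a trajectory is a weighted sum of radial basis functions, a Gaussian over the weights is fitted from demonstrations, and sampling weights yields strike variations around the demonstrated mean. Basis count and width are illustrative.

```python
import numpy as np

def rbf_basis(t, num_basis=10, width=0.02):
    centers = np.linspace(0, 1, num_basis)
    phi = np.exp(-((t[:, None] - centers[None, :]) ** 2) / (2 * width))
    return phi / phi.sum(axis=1, keepdims=True)  # (T, num_basis), normalized

def fit_promp(demos, num_basis=10):
    """demos: list of (T,) joint trajectories on a common time grid in [0, 1]."""
    t = np.linspace(0, 1, demos[0].shape[0])
    phi = rbf_basis(t, num_basis)
    # Least-squares weights for each demonstration, then a Gaussian over them.
    weights = np.stack([np.linalg.lstsq(phi, d, rcond=None)[0] for d in demos])
    return phi, weights.mean(axis=0), np.cov(weights.T)

def sample_trajectory(phi, mu_w, sigma_w, rng=np.random.default_rng()):
    """Draw one trajectory variation by sampling primitive weights."""
    return phi @ rng.multivariate_normal(mu_w, sigma_w)
```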
When testing conditions differ from those represented in training data, so-called out-of-distribution (OOD) inputs can mar the reliability of black-box learned components in the modern robot autonomy stack. Therefore, coping with OOD data is an important challenge on the path towards trustworthy learning-enabled open-world autonomy. In this paper, we aim to demystify the topic of OOD data and its associated challenges in the context of data-driven robotic systems, drawing connections to emerging paradigms in the ML community that study the effect of OOD data on learned models in isolation. We argue that as roboticists, we should reason about the overall system-level competence of a robot as it performs tasks in OOD conditions. We highlight key research questions around this system-level view of OOD problems to guide future research toward safe and reliable learning-enabled autonomy.
Despite recent progress towards scaling up multimodal vision-language models, these models are still known to struggle on compositional generalization benchmarks such as Winoground. We find that a critical component lacking from current vision-language models is relation-level alignment: the ability to match directional semantic relations in text (e.g., "mug in grass") with spatial relationships in the image (e.g., the position of the mug relative to the grass). To tackle this problem, we show that relation alignment can be enforced by encouraging the directed language attention from 'mug' to 'grass' (capturing the semantic relation 'in') to match the directed visual attention from the mug to the grass. Tokens and their corresponding objects are softly identified using the cross-modal attention. We prove that this notion of soft relation alignment is equivalent to enforcing congruence between vision and language attention matrices under a 'change of basis' provided by the cross-modal attention matrix. Intuitively, our approach projects visual attention into the language attention space to calculate its divergence from the actual language attention, and vice versa. We apply our Cross-modal Attention Congruence Regularization (CACR) loss to UNITER and improve on the state-of-the-art approach to Winoground.
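A minimal sketch of the congruence idea under my reading of the abstract (an assumption, not the authors' code): visual self-attention is projected into the language space via the cross-modal attention matrix as the "change of basis," and its divergence from the language self-attention is penalized, and vice versa.

```python
import torch
import torch.nn.functional as F

def cacr_loss(L, V, C):
    """L: (n, n) language attn; V: (m, m) visual attn; C: (n, m) cross attn."""
    V_in_lang = C @ V @ C.T   # change of basis: visual -> language space
    L_in_vis = C.T @ L @ C    # and language -> visual space
    # KL(softmax(p) || softmax(q)), row-wise over attention distributions.
    kl = lambda p, q: F.kl_div(
        torch.log_softmax(q, dim=-1), torch.softmax(p, dim=-1),
        reduction="batchmean",
    )
    return kl(L, V_in_lang) + kl(V, L_in_vis)
```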
Context is vital for commonsense moral reasoning. "Lying to a friend" is wrong if it is meant to deceive them, but may be morally okay if it is intended to protect them. Such nuanced but salient contextual information can potentially flip the moral judgment of an action. Thus, we present ClarifyDelphi, an interactive system that elicits missing contexts of a moral situation by generating clarification questions such as "Why did you lie to your friend?". Our approach is inspired by the observation that questions whose potential answers lead to diverging moral judgments are the most informative. We learn to generate questions using reinforcement learning, by maximizing the divergence between moral judgments of hypothetical answers to a question. Human evaluation shows that our system generates more relevant, informative, and defeasible questions compared to other question generation baselines. ClarifyDelphi assists informed moral reasoning processes by seeking additional morally consequential context to disambiguate social and moral situations.
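A minimal sketch of the divergence reward the abstract describes; the judgment model and hypothetical-answer generator are assumed components, and the judgment classes below are illustrative. A question scores highly when its possible answers push the moral judgment in opposite directions, here measured with Jensen-Shannon divergence.

```python
import numpy as np
from scipy.spatial.distance import jensenshannon

def question_reward(judgment_probs_a, judgment_probs_b):
    """Each input: a distribution over (wrong, okay, good) for one hypothetical answer."""
    # scipy returns the JS *distance*; squaring recovers the JS divergence.
    return jensenshannon(judgment_probs_a, judgment_probs_b) ** 2

# e.g. "Why did you lie to your friend?" with answers "to deceive them" vs
# "to protect them" should yield divergent judgments and hence a high reward:
print(question_reward([0.9, 0.08, 0.02], [0.1, 0.3, 0.6]))
```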